
    Covert channel detection using Information Theory

    This paper presents an information-theoretic detection framework for covert channels. We first show that the usual notion of interference does not characterize the deliberate information flow of covert channels. We then show that even an enhanced notion of "iterated multivalued interference" cannot capture flows with capacity lower than one bit of information per channel use. Finally, we characterize and compute the capacity of covert channels that use control flows for a class of systems. (In Proceedings SecCo 2010, arXiv:1102.516.)

    Operational Rate-Distortion Performance of Single-source and Distributed Compressed Sensing

    We consider correlated and distributed sources without cooperation at the encoder. For these sources, we derive the best achievable performance, in the rate-distortion sense, of any distributed compressed sensing scheme under the constraint of high-rate quantization. Moreover, under this model we derive a closed-form expression of the rate gain achieved by taking into account the correlation of the sources at the receiver, and a closed-form expression of the average performance of the oracle receiver for independent and joint reconstruction. Finally, we show experimentally that exploiting the correlation between the sources performs close to optimal, and that the only penalty is due to the missing knowledge of the sparsity support, as in (non-distributed) compressed sensing. Although the derivation is performed in the large-system regime, where signal and system parameters tend to infinity, numerical results show that the equations match simulations for parameter values of practical interest. (To appear in IEEE Transactions on Communication.)
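    As a rough illustration only (not the paper's derivation, and with arbitrary parameter values), the following Monte-Carlo sketch contrasts an independent and a joint linear-MMSE oracle receiver for two correlated sparse sources measured without encoder cooperation; the gap between the two printed errors is the kind of gain that exploiting the correlation at the receiver provides.

```python
# Hypothetical sketch (not the paper's derivation): independent vs. joint oracle
# reconstruction of two correlated sparse sources, each measured by its own encoder.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, sigma, rho = 256, 32, 8, 0.1, 0.95   # length, measurements, sparsity, noise std, correlation

support = rng.choice(n, size=k, replace=False)
# Correlated nonzero entries: per coefficient, Cov([a; b]) = [[1, rho], [rho, 1]].
a = rng.normal(size=k)
b = rho * a + np.sqrt(1 - rho**2) * rng.normal(size=k)

A1 = rng.normal(size=(m, n)) / np.sqrt(m)      # separate sensing matrices (no cooperation)
A2 = rng.normal(size=(m, n)) / np.sqrt(m)
y1 = A1[:, support] @ a + sigma * rng.normal(size=m)
y2 = A2[:, support] @ b + sigma * rng.normal(size=m)

def lmmse(B, y, C):
    """Linear MMSE estimate of theta from y = B @ theta + noise, with prior covariance C."""
    G = C @ B.T @ np.linalg.inv(B @ C @ B.T + sigma**2 * np.eye(len(y)))
    return G @ y

I = np.eye(k)
# Independent reconstruction: each source recovered from its own measurements, ignoring rho.
a_ind = lmmse(A1[:, support], y1, I)
b_ind = lmmse(A2[:, support], y2, I)

# Joint reconstruction: stack both measurement systems and use the full prior covariance.
B = np.block([[A1[:, support], np.zeros((m, k))],
              [np.zeros((m, k)), A2[:, support]]])
C = np.block([[I, rho * I], [rho * I, I]])
theta = lmmse(B, np.concatenate([y1, y2]), C)
a_jnt, b_jnt = theta[:k], theta[k:]

print("independent MSE:", np.mean((a_ind - a)**2 + (b_ind - b)**2))
print("joint MSE:      ", np.mean((a_jnt - a)**2 + (b_jnt - b)**2))
```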

    Exact Performance Analysis of the Oracle Receiver for Compressed Sensing Reconstruction

    A sparse or compressible signal can be recovered from a number of noisy random projections smaller than that dictated by classical Shannon/Nyquist theory. In this paper, we derive the closed-form expression of the mean square error performance of the oracle receiver, which knows the sparsity pattern of the signal. With respect to existing bounds, our result is exact and does not depend on a particular realization of the sensing matrix. Moreover, our result holds irrespective of whether the noise affecting the measurements is white or correlated. Numerical results show a perfect match between equations and simulations, confirming the validity of the result. (To be published in the ICASSP 2014 proceedings.)
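    The closed-form expression itself is not reproduced in the abstract; as a minimal sketch of the setting, the snippet below measures the empirical mean square error of the oracle least-squares receiver on a fixed sensing matrix and compares it with the textbook conditional expression sigma^2 * trace((A_S^T A_S)^{-1}) for white noise. All sizes are arbitrary, and this is not the paper's realization-independent result.

```python
# Hypothetical sketch (not the paper's analysis): empirical MSE of the oracle
# least-squares receiver, which knows the sparsity support, vs. the textbook
# conditional expression for white noise on one fixed sensing-matrix realization.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, sigma, trials = 256, 64, 8, 0.1, 2000

support = rng.choice(n, size=k, replace=False)
A = rng.normal(size=(m, n)) / np.sqrt(m)          # one fixed sensing matrix
A_S = A[:, support]

x = np.zeros(n)
x[support] = rng.normal(size=k)

err = 0.0
for _ in range(trials):
    y = A @ x + sigma * rng.normal(size=m)                 # noisy random projections
    xhat_S, *_ = np.linalg.lstsq(A_S, y, rcond=None)       # oracle: solve on the known support
    err += np.sum((xhat_S - x[support])**2)

empirical_mse = err / trials
predicted_mse = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))
print(f"empirical MSE: {empirical_mse:.5f}   textbook LS prediction: {predicted_mse:.5f}")
```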

    Source Coding with Side Information at the Decoder and Uncertain Knowledge of the Correlation

    This paper considers the problem of lossless source coding with side information at the decoder, when the correlation model between the source and the side information is uncertain. Four parametrized models representing the correlation between the source and the side information are introduced. The uncertainty on the correlation appears through the lack of knowledge of the parameter values. For each model, we propose a practical coding scheme based on non-binary Low-Density Parity-Check (LDPC) codes that is able to deal with the parameter uncertainty. At the encoder, the choice of the coding rate results from an information-theoretic analysis. We then propose decoding algorithms that jointly estimate the source vector and the parameters. As the proposed decoder is based on the Expectation-Maximization (EM) algorithm, which is very sensitive to initialization, we also propose a method that first produces a coarse estimate of the parameters.
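    As a toy illustration of the joint estimation idea (not the paper's scheme, which relies on non-binary LDPC belief propagation), the sketch below runs an EM loop that alternates between computing posteriors on the source bits and re-estimating an unknown binary-symmetric correlation parameter, starting from a coarse moment-based estimate; a biased Bernoulli prior on the source stands in for the code constraints so the example stays self-contained.

```python
# Hypothetical toy (not the paper's scheme): EM-style joint estimation of a source
# vector X and an unknown crossover parameter theta, with Y = X xor noise at the decoder.
import numpy as np

rng = np.random.default_rng(2)
n, p0, theta_true = 10000, 0.1, 0.08          # source length, P(X=1), true crossover

x = (rng.random(n) < p0).astype(int)
y = x ^ (rng.random(n) < theta_true).astype(int)

# Coarse initialization (method of moments): P(Y=1) = p0 + theta - 2*p0*theta.
theta = float(np.clip((y.mean() - p0) / (1 - 2 * p0), 1e-3, 0.49))

for _ in range(20):
    # E-step: posterior P(X_i = 1 | Y_i, theta) under the Bernoulli(p0) prior.
    post1 = np.where(y == 1,
                     p0 * (1 - theta) / (p0 * (1 - theta) + (1 - p0) * theta),
                     p0 * theta / (p0 * theta + (1 - p0) * (1 - theta)))
    # M-step: expected fraction of flipped positions P(X_i != Y_i | Y_i).
    p_flip = np.where(y == 1, 1.0 - post1, post1)
    theta = float(p_flip.mean())

x_hat = (post1 > 0.5).astype(int)
print(f"estimated theta = {theta:.4f} (true {theta_true}), bit error rate = {(x_hat != x).mean():.4f}")
```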

    Distributed Coding in Sensor Networks with Uncertain Knowledge of the Correlations

    This article addresses the problem of coding a source X with side information Y available only at the decoder, in the case where P(Y|X) is imperfectly known at the decoder.

    Complementary Graph Entropy, AND Product, and Disjoint Union of Graphs

    In the zero-error Slepian-Wolf source coding problem, the optimal rate is given by the complementary graph entropy $\overline{H}$ of the characteristic graph. It has no single-letter formula, except for perfect graphs, for the pentagon graph with uniform distribution $G_5$, and for their disjoint union. We consider two particular instances, where the characteristic graphs can be written respectively as an AND product $\wedge$ and as a disjoint union $\sqcup$. We derive a structural result that equates $\overline{H}(\wedge\,\cdot)$ and $\overline{H}(\sqcup\,\cdot)$ up to a multiplicative constant, which has two consequences. First, we prove that the cases where $\overline{H}(\wedge\,\cdot)$ and $\overline{H}(\sqcup\,\cdot)$ can be linearized coincide. Second, we determine $\overline{H}$ in cases where it was previously unknown: products of perfect graphs; and $G_5 \wedge G$ when $G$ is a perfect graph, using Tuncel et al.'s result for $\overline{H}(G_5 \sqcup G)$. The graphs in these cases are not perfect in general.
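    For readers who want to experiment with the two graph operations involved, here is a minimal sketch using networkx; it assumes the AND product connects pairs that are adjacent in both coordinates (networkx's tensor product), and uses the pentagon together with a small perfect graph as stand-ins.

```python
# Hypothetical helper (not from the paper): the two graph operations appearing above.
import networkx as nx

G5 = nx.cycle_graph(5)          # pentagon: the classic non-perfect example
G = nx.complete_graph(3)        # a small perfect graph

and_product = nx.tensor_product(G5, G)   # AND product: adjacent iff adjacent in both coordinates
union = nx.disjoint_union(G5, G)         # disjoint union: vertex sets kept separate

print("AND product:    ", and_product.number_of_nodes(), "vertices,",
      and_product.number_of_edges(), "edges")
print("disjoint union: ", union.number_of_nodes(), "vertices,",
      union.number_of_edges(), "edges")
```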

    Zero-Error Coding for Computing with Encoder Side-Information

    We study the zero-error source coding problem in which an encoder with Side Information (SI) $g(Y)$ transmits source symbols $X$ to a decoder. The decoder has SI $Y$ and wants to recover $f(X,Y)$, where $f$ and $g$ are deterministic. We exhibit a condition on the source distribution and $g$, which we call "pairwise shared side information", such that the optimal rate has a single-letter expression. This condition is satisfied if every pair of source symbols "shares" at least one SI symbol for every output of $g$. It has a practical interpretation, as $Y$ models a request made by the encoder on an image $X$, and $g(Y)$ corresponds to the type of request. It also has a graph-theoretical interpretation: under "pairwise shared side information" the characteristic graph can be written as a disjoint union of OR products. In the case where the source distribution has full support, we provide an analytic expression for the optimal rate. We develop an example under "pairwise shared side information", and we show that the optimal coding scheme outperforms several strategies from the literature.
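    The following sketch (with a made-up alphabet, distribution and function, not taken from the paper) builds the standard characteristic graph for coding for computing with decoder side information and colors it to obtain a one-shot zero-error code; the paper's refinement using the encoder side information $g(Y)$ is not modelled here.

```python
# Hypothetical sketch: two source symbols are connected (must get different codewords)
# if some side-information symbol y is jointly possible with both and f(x, y) differs.
from itertools import combinations
import networkx as nx

X = [0, 1, 2, 3]
Y = [0, 1, 2]
P = {(x, y): 1 / 12 for x in X for y in Y}      # toy joint distribution (full support)

def f(x, y):
    return (x + y) % 3                           # function the decoder wants to compute

G = nx.Graph()
G.add_nodes_from(X)
for x1, x2 in combinations(X, 2):
    if any(P[(x1, y)] > 0 and P[(x2, y)] > 0 and f(x1, y) != f(x2, y) for y in Y):
        G.add_edge(x1, x2)

# A proper coloring of the characteristic graph gives a valid one-shot zero-error
# encoding: source symbols with the same color can share a codeword.
coloring = nx.greedy_color(G)
print("characteristic graph edges:", list(G.edges()))
print("one-shot code (symbol -> codeword index):", coloring)
```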

    Low-complexity single-image super-resolution based on nonnegative neighbor embedding

    This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low-resolution (LR) and high-resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as a weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity yet competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact, pre-built (therefore external) dictionary, which allows one-step upscaling. The neighbor-embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while appreciably reducing the computational time.
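    A minimal sketch of the core embedding step, under the assumption that the nonnegative weights are obtained by non-negative least squares on the K nearest LR atoms (function and dictionary names are made up; this is not the authors' implementation):

```python
# Hypothetical sketch of nonnegative neighbor embedding: approximate each LR feature
# vector by a nonnegative combination of its K nearest LR atoms, then apply the same
# weights to the paired HR atoms.
import numpy as np
from scipy.optimize import nnls

def ne_super_resolve(lr_feat, lr_dict, hr_dict, K=12):
    """lr_feat: (d_lr,) input LR feature vector.
    lr_dict: (N, d_lr) LR training atoms; hr_dict: (N, d_hr) paired HR atoms."""
    # 1. K nearest neighbours in the LR feature space.
    dists = np.linalg.norm(lr_dict - lr_feat, axis=1)
    idx = np.argsort(dists)[:K]
    # 2. Nonnegative weights reconstructing the LR vector from its neighbours.
    w, _ = nnls(lr_dict[idx].T, lr_feat)
    # 3. Transfer the same local embedding to the HR space.
    return w @ hr_dict[idx]

# Toy usage with a random dictionary of paired patches (purely illustrative).
rng = np.random.default_rng(3)
lr_dict = rng.random((500, 16))            # e.g. 4x4 LR features
hr_dict = rng.random((500, 64))            # e.g. 8x8 HR patches
hr_patch = ne_super_resolve(lr_dict[0] + 0.01 * rng.random(16), lr_dict, hr_dict)
print(hr_patch.shape)
```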

    CNN-based Prediction of Partition Path for VVC Fast Inter Partitioning Using Motion Fields

    The Versatile Video Coding (VVC) standard has recently been finalized by the Joint Video Exploration Team (JVET). Compared to the High Efficiency Video Coding (HEVC) standard, VVC offers about 50% compression efficiency gain, in terms of Bjontegaard Delta-Rate (BD-rate), at the cost of a 10-fold increase in encoding complexity. In this paper, we propose a method based on a Convolutional Neural Network (CNN) to speed up the inter partitioning process in VVC. Firstly, a novel representation of the quadtree with nested multi-type tree (QTMT) partition is introduced, derived from the partition path. Secondly, we develop a U-Net-based CNN that takes a multi-scale motion vector field as input at the Coding Tree Unit (CTU) level. The purpose of the CNN inference is to predict the optimal partition path during the Rate-Distortion Optimization (RDO) process. To achieve this, we divide the CTU into a grid and predict the Quaternary Tree (QT) depth and Multi-type Tree (MT) split decisions for each cell of the grid. Thirdly, an efficient partition pruning algorithm is introduced that uses the CNN predictions at each partitioning level to skip RDO evaluations of unnecessary partition paths. Finally, an adaptive threshold selection scheme is designed, making the trade-off between complexity and efficiency scalable. Experiments show that the proposed method achieves an acceleration ranging from 16.5% to 60.2% under the Random Access Group Of Pictures 32 (RAGOP32) configuration, with a reasonable efficiency drop ranging from 0.44% to 4.59% in terms of BD-rate, which surpasses other state-of-the-art solutions. Additionally, our method stands out as one of the lightest approaches in the field, which ensures its applicability to other encoders.
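    The exact architecture is not given in the abstract; the sketch below is a deliberately small U-Net-style stand-in in PyTorch, with assumed input resolution, grid size and numbers of QT/MT outputs, just to show how a motion-vector field at the CTU level can be mapped to per-cell split predictions.

```python
# Hypothetical sketch (shapes and head sizes are assumptions, not the paper's exact
# architecture): a small U-Net-style CNN taking a 2-channel motion-vector field over a
# CTU, sampled on a 32x32 grid, and predicting per-cell QT-depth and MT-split logits.
import torch
import torch.nn as nn

class PartitionNet(nn.Module):
    def __init__(self, in_ch=2, qt_depths=4, mt_decisions=6):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec1 = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.AvgPool2d(2)                      # 32x32 grid -> 16x16 cells
        self.qt_head = nn.Conv2d(32, qt_depths, 1)       # QT-depth class per cell
        self.mt_head = nn.Conv2d(32, mt_decisions, 1)    # MT split decisions per cell

    def forward(self, mv_field):                         # (B, 2, 32, 32)
        e1 = self.enc1(mv_field)                         # (B, 32, 32, 32)
        e2 = self.enc2(self.down(e1))                    # (B, 64, 16, 16)
        d1 = self.dec1(torch.cat([self.up(e2), e1], 1))  # skip connection, (B, 32, 32, 32)
        cells = self.pool(d1)                            # (B, 32, 16, 16)
        return self.qt_head(cells), self.mt_head(cells)  # per-cell logits

net = PartitionNet()
qt_logits, mt_logits = net(torch.randn(1, 2, 32, 32))
print(qt_logits.shape, mt_logits.shape)
```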

    Context-adaptive neural network based prediction for image compression

    This paper describes a set of neural network architectures, called the Prediction Neural Networks Set (PNNS), based on both fully-connected and convolutional neural networks, for intra image prediction. The choice of neural network for predicting a given image block depends on the block size, and hence does not need to be signalled to the decoder. It is shown that, while fully-connected neural networks give good performance for small block sizes, convolutional neural networks provide better predictions in large blocks with complex textures. Thanks to the use of masks of random sizes during training, the neural networks of PNNS adapt well to the available context, which may vary depending on the position of the image block to be predicted. When integrating PNNS into an H.265 codec, PSNR-rate performance gains ranging from 1.46% to 5.20% are obtained. These gains are on average 0.99% larger than those of prior neural-network-based methods. Unlike the H.265 intra prediction modes, which are each specialized in predicting a specific texture, the proposed PNNS can model a large set of complex textures.
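    A minimal sketch of the two mechanisms the abstract relies on, with made-up block sizes and a simplified random-mask rule (not the PNNS code): the predictor type is a pure function of the block size, so it never has to be transmitted, and training-time context masking exposes the networks to variable amounts of context.

```python
# Hypothetical sketch (not the PNNS implementation): block-size-based predictor
# selection and random-size context masking during training.
import numpy as np

FC_BLOCK_SIZES = {4, 8}            # small blocks -> fully-connected predictor (assumed split)
CONV_BLOCK_SIZES = {16, 32, 64}    # large blocks -> convolutional predictor

def pick_predictor(block_size):
    """Encoder and decoder both run this, so the choice needs no signalling."""
    return "fully-connected" if block_size in FC_BLOCK_SIZES else "convolutional"

def mask_context(context, rng):
    """Zero out a randomly sized portion of the causal context (training only)."""
    h, w = context.shape
    mh, mw = rng.integers(0, h // 2 + 1), rng.integers(0, w // 2 + 1)
    masked = context.copy()
    masked[:mh, :mw] = 0.0
    return masked

rng = np.random.default_rng(4)
print(pick_predictor(8), pick_predictor(32))
print(mask_context(np.ones((16, 16)), rng).sum())
```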